Online personalized recommendation services are generally hosted in the cloud, where users query a cloud-based model to receive recommended items such as merchandise of interest or news feeds. State-of-the-art recommendation models rely on sparse and dense features to represent users' profile information and the items they interact with. Although sparse features account for 99% of the total model size, little attention has been paid to the potential information leakage through them. These sparse features are employed to track users' behavior, e.g., their click history, object interactions, etc., and thus potentially carry each user's private information. Sparse features are represented as learned embedding vectors stored in large tables, and personalized recommendation is performed by using a specific user's sparse feature to index into these tables. Even with recently proposed methods that hide the computation happening in the cloud, an attacker may still be able to track the access patterns to the embedding tables. This paper explores the private information that may be learned by tracking a recommendation model's sparse feature access patterns. We first characterize the types of attacks that can be carried out on sparse features in recommendation models in an untrusted cloud, and then demonstrate how each of these attacks can be used to extract users' private information or to track users by their behavior over time.
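To make the leakage concrete, here is a minimal, hypothetical sketch (not the paper's code) of how a user's sparse feature turns into embedding-table accesses that a cloud-side observer could log and link across queries; the item IDs, table size, and embedding dimension are all made up:

```python
import random

# Toy embedding table: one row per item ID (the values are irrelevant to the attack).
EMBED_DIM = 4
table = {item_id: [random.random() for _ in range(EMBED_DIM)]
         for item_id in range(1000)}

access_log = []  # what an observer in the cloud could record

def embedding_lookup(sparse_ids):
    """Gather embedding rows for a user's sparse feature (e.g. click history).
    The *indices* themselves are the access pattern that leaks, even if the
    downstream computation is hidden."""
    access_log.append(tuple(sparse_ids))
    return [table[i] for i in sparse_ids]

# Two queries by the same user with an overlapping click history ...
embedding_lookup([42, 7, 993])
embedding_lookup([42, 7, 101])

# ... let the observer link the queries by their shared indices.
shared = set(access_log[0]) & set(access_log[1])
print(sorted(shared))  # [7, 42]
```

The same logging suffices for both extraction (the indices are the click history) and longitudinal tracking (recurring index sets fingerprint a user).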
Classifiers in supervised learning have various security and privacy issues, e.g., 1) data poisoning attacks, backdoor attacks, and adversarial examples on the security side as well as 2) inference attacks and the right to be forgotten for the training data on the privacy side. Various secure and privacy-preserving supervised learning algorithms with formal guarantees have been proposed to address these issues. However, they suffer from various limitations such as accuracy loss, small certified security guarantees, and/or inefficiency. Self-supervised learning is an emerging technique to pre-train encoders using unlabeled data. Given a pre-trained encoder as a feature extractor, supervised learning can train a simple yet accurate classifier using a small amount of labeled training data. In this work, we perform the first systematic, principled measurement study to understand whether and when a pre-trained encoder can address the limitations of secure or privacy-preserving supervised learning algorithms. Our key findings are that a pre-trained encoder substantially improves 1) both accuracy under no attacks and certified security guarantees against data poisoning and backdoor attacks of state-of-the-art secure learning algorithms (i.e., bagging and KNN), 2) certified security guarantees of randomized smoothing against adversarial examples without sacrificing its accuracy under no attacks, 3) accuracy of differentially private classifiers, and 4) accuracy and/or efficiency of exact machine unlearning.
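As a rough illustration of why bagging-style ensembles admit certified guarantees against poisoning, the following toy majority-vote sketch (not the certified bounds studied in the measurement work) shows that the ensemble prediction is stable while fewer than half the vote margin of base classifiers flip:

```python
from collections import Counter

def bagged_predict(votes):
    """Majority vote over base classifiers, each trained on a random subsample.
    Returns the winning label and a crude stability margin: flipping fewer
    than `margin` base-classifier votes cannot change the output, so an
    attacker must poison enough subsamples to flip at least that many."""
    counts = Counter(votes)
    (top_label, n_top), = counts.most_common(1)
    n_second = max((c for label, c in counts.items() if label != top_label),
                   default=0)
    margin = (n_top - n_second) // 2
    return top_label, margin

label, margin = bagged_predict(["cat"] * 7 + ["dog"] * 3)
print(label, margin)  # cat 2
```

A stronger pre-trained encoder makes each base classifier more accurate on its small subsample, which widens this vote margin and hence the certified guarantee — the mechanism behind the paper's first key finding.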
In recent years, object detection has achieved large performance improvements, but detection results for small objects are still unsatisfactory. This work proposes a strategy based on feature fusion and dilated convolution, employing dilated convolution to broaden the receptive field of feature maps at various scales in order to address this issue. On the one hand, this improves the detection accuracy of larger objects; on the other hand, it provides more contextual information for small objects, which helps improve their detection accuracy. The shallow semantic information of small objects is obtained by filtering out noise in the feature maps, and more small-object feature information is preserved by using a multi-scale feature fusion module and an attention mechanism. Fusing this shallow feature information with deep semantic information generates richer feature maps for small object detection. Experiments show that this method achieves higher accuracy than the traditional YOLOv3 network in detecting small and occluded objects. In addition, we achieve 32.8% mean average precision (mAP) on small objects on the MS COCO 2017 test set. For 640×640 input, this method achieves 88.76% mAP on the PASCAL VOC 2012 dataset.
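The receptive-field arithmetic behind the dilated-convolution strategy can be checked with a small helper. This is generic stride-1 convolution math, not the paper's exact network: stacking 3×3 layers with growing dilation rates enlarges the receptive field far faster than standard convolutions, without adding parameters:

```python
def receptive_field(layers):
    """Receptive field (in input pixels, along one axis) of a stack of
    stride-1 conv layers given as (kernel_size, dilation) pairs.
    Each layer extends the field by (kernel_size - 1) * dilation."""
    rf = 1
    for kernel_size, dilation in layers:
        rf += (kernel_size - 1) * dilation
    return rf

# Three 3x3 layers: standard convolutions vs. dilation rates 1, 2, 4.
print(receptive_field([(3, 1)] * 3))              # 7
print(receptive_field([(3, 1), (3, 2), (3, 4)]))  # 15
```

The dilated stack sees a 15-pixel context with the same cost as the 7-pixel standard stack, which is the extra context the abstract credits for small-object accuracy.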
Error correction in automatic speech recognition (ASR) aims to correct the incorrect words in sentences generated by ASR models. Since recent ASR models usually have a low word error rate (WER), error correction models should modify only the incorrect words to avoid affecting originally correct tokens, and detecting incorrect words is therefore important for error correction. Previous works on error correction either implicitly detect error words through target-source attention or CTC (connectionist temporal classification) loss, or explicitly locate specific deletion/substitution/insertion errors. However, implicit error detection does not provide a clear signal about which tokens are incorrect, and explicit error detection suffers from low detection accuracy. In this paper, we propose SoftCorrect, with a soft error detection mechanism that avoids the limitations of both explicit and implicit error detection. Specifically, we first detect whether a token is correct or not through a probability produced by a specially designed language model, and then design a constrained CTC loss that duplicates only the detected incorrect tokens, letting the decoder focus on correcting the error tokens. Compared with implicit error detection with CTC loss, SoftCorrect provides an explicit signal about which words are incorrect and thus does not need to duplicate every token, but only the incorrect ones; compared with explicit error detection, SoftCorrect does not detect specific deletion/substitution/insertion errors but simply leaves them to the CTC loss. Experiments on the AISHELL-1 and Aidatatang datasets show that SoftCorrect achieves 26.1% and 9.4% CER reduction, respectively, outperforming previous works by a large margin while still enjoying the fast speed of parallel generation.
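The detection step can be caricatured as thresholding per-token language-model probabilities; the tokens, scores, and threshold below are invented for illustration, and the real SoftCorrect uses a dedicated LM plus a constrained CTC loss rather than a fixed cutoff:

```python
def detect_errors(tokens, probs, threshold=0.5):
    """Mark tokens whose LM probability falls below a threshold as suspect.
    Only these would be duplicated and handed to the corrector; confident
    tokens are kept as-is, protecting originally correct output."""
    return [tok for tok, p in zip(tokens, probs) if p < threshold]

tokens = ["we", "recognize", "speech", "wreck"]
probs = [0.95, 0.90, 0.85, 0.20]  # hypothetical LM confidences
print(detect_errors(tokens, probs))  # ['wreck']
```

This shows the "soft" idea: the detector commits only to *which* tokens are wrong, not to whether the error was a deletion, substitution, or insertion.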
Deriving the governing equations of a complex physical system from first principles can be very challenging when the system contains unknown terms and hidden physical mechanisms. In this work, we employ a deep learning architecture to learn the fluid partial differential equations (PDEs) of a plasma system from data acquired from a fully kinetic model. The learned multi-moment fluid PDEs are demonstrated to incorporate kinetic effects such as Landau damping. Based on the learned fluid closure, the data-driven multi-moment fluid model well reproduces all the physical quantities derived from the fully kinetic model. The calculated damping rate of Landau damping is consistent with both the fully kinetic simulation and linear theory. Data-driven fluid modeling of PDEs for complex physical systems can be applied to improve fluid closures and reduce the computational cost of multi-scale modeling of global systems.
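The data-driven idea can be illustrated in miniature by recovering an unknown damping coefficient from synthetic decay data via linear regression; the paper's actual method learns full multi-moment fluid PDEs with deep networks, so everything below (the model du/dt = -gamma*u, the data, the fit) is a toy stand-in:

```python
import math

# Synthetic "measurements" of a damped amplitude u(t) = exp(-gamma * t),
# standing in for data produced by a fully kinetic simulation.
TRUE_GAMMA = 0.3
ts = [0.1 * k for k in range(50)]
us = [math.exp(-TRUE_GAMMA * t) for t in ts]

# Fit the unknown damping term in du/dt = -gamma * u by regressing
# log(u) against t: the slope of that line is -gamma.
n = len(ts)
mean_t = sum(ts) / n
mean_log_u = sum(math.log(u) for u in us) / n
slope = (sum((t - mean_t) * (math.log(u) - mean_log_u)
             for t, u in zip(ts, us))
         / sum((t - mean_t) ** 2 for t in ts))
gamma = -slope
print(round(gamma, 3))  # 0.3
```

The same principle scales up: choose a parameterized form for the missing closure terms, then fit the parameters so the fluid model reproduces the kinetic data.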
Many real-world problems can be formulated as the alignment of two geometric patterns. Previously, a large amount of research has focused on the alignment of 2D or 3D patterns in the field of computer vision. Recently, the alignment problem in high dimensions has found several new applications in practice, but research on its algorithmic side is still rather limited. To the best of our knowledge, most existing approaches are simple extensions of the 2D and 3D methods and often suffer from issues such as high computational complexity. In this paper, we propose an effective framework for compressing high-dimensional geometric patterns. Any existing alignment method can be applied to the compressed patterns, and the time complexity can be substantially reduced. Our idea is inspired by the observation that high-dimensional data often have a low intrinsic dimension. Our framework is a "data-dependent" approach whose complexity depends on the intrinsic dimension of the input data. Our experimental results show that running alignment algorithms on the compressed patterns achieves quality similar to that on the original patterns, but with a greatly reduced running time (including the time cost of compression).
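A data-oblivious random projection (a simple stand-in for the paper's data-dependent compression scheme) already shows the workflow: compress the points, then run the alignment step on the compressed copies. The "alignment" here is just nearest-neighbor matching, and the data are synthetic points that secretly lie on a line, i.e., have intrinsic dimension one:

```python
import math
import random

random.seed(0)

def project(points, out_dim):
    """Compress d-dimensional points with a random Gaussian projection."""
    d = len(points[0])
    proj = [[random.gauss(0, 1) / math.sqrt(out_dim) for _ in range(d)]
            for _ in range(out_dim)]
    return [[sum(row[j] * p[j] for j in range(d)) for row in proj]
            for p in points]

def nearest(query, points):
    """Index of the closest point -- a stand-in 'alignment' step."""
    return min(range(len(points)),
               key=lambda i: sum((a - b) ** 2
                                 for a, b in zip(points[i], query)))

# 200-dimensional points with low intrinsic dimension (they lie on a line).
high = [[0.01 * k] * 200 for k in range(100)]
low = project(high, 20)  # 10x compression

# Alignment on the compressed patterns matches alignment on the originals.
q_high, q_low = high[57], low[57]
print(nearest(q_high, high), nearest(q_low, low))  # 57 57
```

Every subsequent distance computation now touches 20 coordinates instead of 200, which is where the running-time savings come from.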
Open information extraction (OIE) is an important NLP task that targets extracting structured information from unstructured text without restrictions on relation type or text domain. This survey covers OIE techniques from 2007 to 2022, with a focus on new models not covered by previous surveys. We propose a new categorization from the perspective of information sources to accommodate the development of recent OIE techniques. In addition, we summarize three major approaches based on their task settings, as well as the currently popular datasets and model evaluation metrics. Based on this comprehensive review, several future directions are presented in terms of datasets, information sources, output forms, methods, and evaluation metrics.
As an emerging secure learning paradigm for leveraging cross-institutional private data, vertical federated learning (VFL) promises to improve advertising models by enabling joint learning over complementary user attributes privately owned by advertisers and publishers. However, applying it to advertising systems faces two key challenges: a) the limited scale of labeled overlapping samples, and b) the high cost of real-time cross-institutional serving. In this paper, we propose a semi-supervised split knowledge distillation framework, VFed-SSD, to alleviate these two limitations. We identify that: i) there is a large amount of unlabeled overlapping data in advertising systems, and ii) we can strike a balance between model performance and inference cost by decomposing the federated model. Specifically, we develop a self-supervised task, Matched Pair Detection (MPD), to exploit the vertically partitioned unlabeled data, and propose the Split Knowledge Distillation (SplitKD) architecture to avoid cross-institutional serving. Empirical studies on three industrial datasets demonstrate the effectiveness of our method, with median AUC improvements over all datasets of 0.86% in the local deployment mode and 2.6% in the federated deployment mode. Overall, our framework provides an efficient federation-enhanced solution for real-time display advertising, with low deployment cost and significant performance gains.
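The distillation half of the design can be caricatured in a few lines: a student (locally deployable) model's scores are pulled toward a teacher (federated) model's scores, so that only the student is needed at serving time and cross-institutional calls are avoided. This is a one-dimensional toy, not SplitKD itself; the scores, learning rate, and step count are invented:

```python
def distill(teacher_scores, student_scores, lr=0.5, steps=20):
    """Nudge each student score toward the corresponding teacher score.
    After enough steps the student mimics the teacher and can serve alone."""
    s = list(student_scores)
    for _ in range(steps):
        s = [x + lr * (t - x) for x, t in zip(s, teacher_scores)]
    return s

teacher = [0.9, 0.1, 0.7]   # hypothetical federated-model predictions
student = [0.5, 0.5, 0.5]   # untrained local model
print([round(x, 3) for x in distill(teacher, student)])  # [0.9, 0.1, 0.7]
```

The real trade-off the paper names is visible even here: the student's quality is bounded by the teacher's, but its inference never leaves the local institution.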
Existing deep learning-based change detection methods try to elaborately design complex neural networks with powerful feature representations, but ignore the universal domain shift induced by time-varying land cover changes, including luminance fluctuations and seasonal changes between pre-event and post-event images, thereby producing sub-optimal results. In this paper, we propose an end-to-end supervised domain adaptation framework for cross-domain change detection, namely SDACD, to effectively alleviate the domain shift between bi-temporal images for better change prediction. Specifically, SDACD performs collaborative adaptation from both the image and feature perspectives with supervised learning. Image adaptation exploits generative adversarial learning with cycle-consistency constraints to perform cross-domain style transformation, effectively narrowing the domain gap from both sides. For feature adaptation, we extract domain-invariant features to align the different feature distributions in the feature space, which further reduces the domain gap between cross-domain images. To further improve performance, we combine three types of bi-temporal images for the final change prediction: the initial input bi-temporal images and two generated bi-temporal images from the pre-event and post-event domains. Extensive experiments and analyses on two benchmarks demonstrate the effectiveness and generality of our proposed framework. Notably, our framework pushes several representative baseline models up to new state-of-the-art records, achieving 97.34% and 92.36% on the CDD and WHU building datasets, respectively. The source code and models are publicly available at https://github.com/perfect-you/sdacd.
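The luminance side of the domain shift can be illustrated by crudely matching the first- and second-order statistics of pre-event pixels to the post-event domain; SDACD instead *learns* this adaptation with cycle-consistent generative networks, so the sketch below (and its pixel values) is only a stand-in for the image-adaptation idea:

```python
def match_statistics(source, target_mean, target_std):
    """Shift and rescale pixel values so their mean and standard deviation
    match the target domain -- a crude, hand-coded image adaptation."""
    n = len(source)
    mean = sum(source) / n
    std = (sum((x - mean) ** 2 for x in source) / n) ** 0.5
    return [(x - mean) / std * target_std + target_mean for x in source]

pre = [10.0, 20.0, 30.0]          # darker pre-event pixels
post_mean, post_std = 100.0, 20.0  # brightness statistics of the post-event image
adapted = match_statistics(pre, post_mean, post_std)
print([round(x, 2) for x in adapted])
```

After adaptation the two images differ mainly where the land cover actually changed, which is exactly what a change detector should see.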
As the third generation of neural networks, spiking neural networks (SNNs) have great potential on neuromorphic hardware because of their high energy efficiency. However, deep spiking reinforcement learning (DSRL), i.e., reinforcement learning (RL) based on SNNs, is still in a preliminary stage due to the binary output and the non-differentiability of the spiking function. To address these issues, we propose the Deep Spiking Q-Network (DSQN) in this paper. Specifically, we propose a directly trained deep spiking reinforcement learning architecture based on leaky integrate-and-fire (LIF) neurons and the Deep Q-Network (DQN). We then adapt a direct spiking learning algorithm for the Deep Spiking Q-Network, and further demonstrate the advantages of using LIF neurons in DSQN theoretically. Comprehensive experiments have been conducted on 17 top-performing Atari games to compare our method with the state-of-the-art conversion method. The experimental results demonstrate the superiority of our method in terms of performance, stability, robustness, and energy efficiency. To the best of our knowledge, our work is the first to achieve state-of-the-art performance on multiple Atari games with a directly trained SNN.
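The LIF dynamics at the core of DSQN can be sketched as a discrete-time simulation: the membrane potential leaks toward rest, integrates the input current, and emits a binary spike (then resets) whenever it crosses the threshold. The time constant, threshold, and input currents below are arbitrary, and the surrogate-gradient training that makes the non-differentiable spike trainable is not shown:

```python
def lif_neuron(inputs, tau=2.0, v_threshold=1.0, v_reset=0.0):
    """Discrete-time leaky integrate-and-fire neuron.
    Returns the binary spike train produced by a sequence of input currents."""
    v = v_reset
    spikes = []
    for current in inputs:
        v = v + (current - v) / tau   # leaky integration toward the input
        if v >= v_threshold:
            spikes.append(1)
            v = v_reset               # hard reset after a spike
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.5, 0.5, 2.0, 2.0, 0.1]))  # [0, 0, 1, 1, 0]
```

The binary spike train is what makes SNNs cheap on neuromorphic hardware, and also what makes direct gradient-based training (the paper's contribution) non-trivial.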